In the latest episode of the Dinis Guarda Podcast, Rev Lebaredian, Vice President of Omniverse and Simulation Technology at NVIDIA, discusses the transformative potential of AI and simulation, the future of robotics, and superhuman specific intelligence. The podcast is powered by Businessabc.net, Citiesabc.com, Wisdomia.ai, and Sportsabc.org.
Rev Lebaredian is a technology expert, entrepreneur, and business leader who serves as Vice President of Omniverse and Simulation Technology at NVIDIA. He leads the development of NVIDIA Omniverse, a platform that combines rendering, physics simulation, and artificial intelligence to create realistic virtual worlds. At NVIDIA, Rev has helped develop technologies such as robotics simulation (Isaac Sim), in-game photography (Ansel), and real-time physics simulation (PhysX). He also founded Steamboat Software, creating tools to help teams work better together.
During the interview with Dinis Guarda, Rev Lebaredian explains the transformative potential of AI and simulation in bridging the physical and digital worlds:
“This new technology—AI, the ability to manufacture intelligence, and combining that with the physical world—is even greater than the introduction of the internet. The value creation is going to be much greater, no doubt about that.
What we are focusing on with simulation is creating a bridge—turning the physical world into a computing problem. This allows us to solve problems at the speed and efficiency of software. The goal is to move the physical world forward with the same transformative potential as software.
It feels like the early days of the internet. Nobody accurately predicted how valuable the internet would become or the applications it would enable. Similarly, combining AI with the physical world will create even greater value than the internet. While we don’t know the exact shape of things 20 years from now, our strategy is to take it step by step, building the necessary technologies to reach the next milestone.”
NVIDIA shaping the new era of AI: Computing, the Physical World, and OpenUSD
Rev Lebaredian shares NVIDIA’s vision for the future and how combining AI, simulation, and computing can bring the digital and physical worlds closer together:
“What we are focusing on with simulation is essentially making a bridge. We’re turning the physical world into a computing problem, so we can solve the challenges of the physical world with the speed and efficiency of software.”
“We were willing to invest deeply in this for a long time on the strength of that conviction. Most other companies won’t or can’t make that kind of investment.
When machine learning, deep learning, this new kind of AI, appeared on our platform, we understood what it meant. On first principles, we went all in. We asked, ‘What are the things we need to build to bring about the future? How do we capitalise on this?’ The first thing was that we needed to take these computers we were building and make them really big—supercomputers at the scale of data centers. Nobody was asking for it at the time, but we believed it was necessary.
We also believed that with this new form of software development—machine learning and AI—there were algorithms we could now develop that were previously out of reach. The most valuable ones would be the ones that interact with our physical world—algorithms that perceive the world around us and manipulate it. Essentially, robotics algorithms. The promise has always been there, but it’s just been out of reach.
If we’re going to create AIs that understand and manipulate the world around us, those AIs are going to have to learn from some examples. Where are those examples going to come from? It’s hard gathering them from the real world, so we should turn the real world into software—into a computing problem. Once we have it inside a computer, that data can feed our AI systems, and we can eventually solve that.”
Rev discusses how NVIDIA is revolutionising physical simulation through OpenUSD and AI technologies:
“OpenUSD (Universal Scene Description) enables a unified representation of complex physical systems, integrating data from CAD tools, simulation tools, and real-world captures. For nearly a decade, we’ve worked on standardising this format, and last year, we formed the Alliance for OpenUSD with founding members Pixar, Disney, Apple, Adobe, Autodesk, and NVIDIA. Now, over 30 companies, including industry giants like Siemens, Trimble, IKEA, and Lowe’s, have joined.
To simulate a factory, for example, you need a single data model that includes all its components—buildings, conveyor systems, robots, sensors, humans, and their properties. This has been a challenge because these data sources have historically been fragmented across different tools and formats. OpenUSD solves this by providing a universal way to represent all these layers of complexity in one model.
By combining OpenUSD with AI technologies and NVIDIA’s advanced computing capabilities, we now have the ability to perform rich, complete simulations of the physical world fast enough to be practical. This is necessary for building all the complex physical systems we need for the world around us.”
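To make that idea of a single, layered data model concrete, here is a minimal sketch using the open-source OpenUSD Python API (the pxr module, installable via the usd-core package). The file name, prim names, and custom attribute are illustrative assumptions, not taken from any NVIDIA pipeline.

```python
# Minimal OpenUSD sketch: one stage acts as the shared data model for a facility.
# The file name, prim names, and custom attribute are illustrative only.
from pxr import Usd, UsdGeom, Sdf

stage = Usd.Stage.CreateNew("factory.usda")          # the single source of truth
UsdGeom.Xform.Define(stage, "/Factory")              # root of the facility

# Different tools and teams contribute prims (or whole layers) under the same
# hierarchy: the building, conveyor systems, robots, sensors, and so on.
UsdGeom.Mesh.Define(stage, "/Factory/Building")
UsdGeom.Xform.Define(stage, "/Factory/ConveyorLine_01")
robot = UsdGeom.Xform.Define(stage, "/Factory/RobotArm_01")

# Physical properties can be attached as attributes and consumed by simulators.
robot.GetPrim().CreateAttribute("payloadKg", Sdf.ValueTypeNames.Float).Set(10.0)

stage.GetRootLayer().Save()
```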
NVIDIA Omniverse: Next industrial revolution in AI
NVIDIA Omniverse is a platform designed to improve 3D workflows by enabling real-time collaboration and simulation. It is built on Universal Scene Description (OpenUSD) and uses NVIDIA’s RTX rendering and AI technologies to connect different 3D tools. The platform supports advanced simulations for creating virtual environments and digital twins, which are useful in industries like film, robotics, and autonomous vehicles.
NVIDIA Omniverse also includes AI-driven features, such as tools for generating realistic animations, and recent updates have continued to expand these capabilities, making the platform even more useful across industries.
During the interview with Dinis Guarda, Rev Lebaredian explains the origin of NVIDIA Omniverse and its fundamental technologies: physical AI and simulation:
“Almost a decade ago, we decided to start building all the simulation technologies necessary to do this, and the computers for it, because we knew it was coming. When the AI explosion happened, the next wave would be physical AI—essentially robotics. To enable physical AI, we needed simulators and simulation computers that could represent the real world inside a computer. That’s where Omniverse came in.
Omniverse is loosely like an operating system that consists of a bunch of functionality to assemble virtual worlds and simulate them. By itself, it’s not actually a simulator or application—it’s more like an operating system with functionality others can build on top of or incorporate into their applications and technologies. We work with almost all tool providers and simulation developers in the ecosystem to make this happen.”
Rev Lebaredian also discusses why Omniverse is essential for physical AI:
“We realised the only way to generate the data—these life experiences for physical AIs—is by creating simulators. By turning the physical world into a software problem, we can generate unlimited data safely, with diversity, and at speeds impossible in the real world. The only limitation is the amount of computing we have available.”
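The toy sketch below illustrates that point: every randomised sample comes with perfect ground-truth labels, and the volume of data is limited only by compute. The scene parameters and noise model are invented for illustration; a real pipeline would render scenes in a full simulator such as Isaac Sim.

```python
# Toy sketch of simulation-generated training data: each sample is randomized
# for diversity and ships with free, exact ground truth. Purely illustrative.
import random

def simulate_sample() -> dict:
    scene = {
        "object_x": random.uniform(-1.0, 1.0),   # randomized object pose (m)
        "object_y": random.uniform(-1.0, 1.0),
        "light": random.uniform(0.2, 1.0),       # randomized lighting
    }
    # Simulated "sensor" reading: the scene plus a little measurement noise.
    observation = {k: v + random.gauss(0.0, 0.01) for k, v in scene.items()}
    # Ground-truth labels are known exactly because we authored the scene.
    labels = {"object_x": scene["object_x"], "object_y": scene["object_y"]}
    return {"observation": observation, "labels": labels}

# Data volume is limited only by available compute, not by real-world collection.
dataset = [simulate_sample() for _ in range(10_000)]
print(len(dataset), dataset[0]["labels"])
```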
Talking about Omniverse’s role in the next industrial revolution, Rev said:
“We believe the next industrial revolution is on its way. It’s about having autonomous agents—robots and robotic systems—operating in the physical world, alongside us, doing things humans possibly can’t. These systems will transform factories, warehouses, supply chains, transportation, cities, and buildings. To build that future, we need two fundamental technologies: AI and simulation.”
The future of AI: Humanoid robots redefining innovation
As the interview continues, Rev Lebaredian explains that robots are more than humanoid figures from science fiction. He said:
“The term robot applies to a much more general set of things than the humanoid forms we often imagine. A robot, or a robotic system, is essentially an autonomous agent that operates in the physical world. It perceives the world through sensors, makes decisions and plans, and then acts to modify the world. It does this in a continuous loop: perceiving, reasoning, and acting.
Robotic systems are appearing everywhere now. For example, in our NVIDIA headquarters, the turnstiles at the reception area perceive you, decide if you have access, and then act to let you through. These systems are becoming more common, but we haven’t seen robots deployed at scale for several reasons.
The spaces we’ve built—retail stores, factories, homes, kitchens—are all designed for humans. They’re configured for beings that are roughly our size, with two arms and two legs, able to navigate stairs and ramps. The most useful general-purpose robots are likely humanoids because they can operate in these human-designed spaces and do useful things.
Since ChatGPT, there’s been a massive acceleration in R&D investment in humanoid robots. Companies like Tesla and many others are pursuing this because now we have the technology to make a general-purpose robot brain. Humanoids are the logical form for general-purpose robots because they fit seamlessly into human environments.”
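The perceive, reason, act loop Rev describes can be sketched in a few lines of Python. The turnstile-style sensor, decision rule, and actuator below are toy stand-ins, not any real robotics stack.

```python
# Toy sketch of the perceive -> reason -> act loop described above.
# The sensor, "brain", and actuator are illustrative stand-ins only.
import random
import time

def perceive() -> dict:
    """Stand-in for reading sensors (cameras, lidar, badge readers)."""
    return {"badge_valid": random.random() < 0.5}

def reason(observation: dict) -> str:
    """Stand-in for the robot brain: decide what to do next."""
    return "open_gate" if observation["badge_valid"] else "hold_gate"

def act(action: str) -> None:
    """Stand-in for driving motors or other actuators."""
    print(f"actuator command: {action}")

# The continuous loop: a real system runs this indefinitely at a fixed rate.
for _ in range(5):
    act(reason(perceive()))
    time.sleep(0.1)
```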
Rev also explains the historical challenges of building robots:
“The reason we haven’t had robots at scale is that building a robust robot brain has been too hard. Programming them to handle unexpected situations in the real world took so much effort that it wasn’t worthwhile. That’s why robots have largely been limited to automotive factories, where the conditions are controlled and the capital costs can be justified.
Now we finally have the ability to create good robot brains. The ‘ChatGPT moment’ unlocked the imagination of what’s possible. We now understand that we can create a general-purpose intelligence capable of reasoning about many things, and this core technology is the foundation for a useful robot brain.”
Then he talked about the future of robots in human spaces and said:
“Humanoid robots will be the first to be deployed at scale. It’s going to take time to reconfigure spaces optimised for specific robots. Eventually, we’ll see lights-out factories and warehouses with robots built specifically for those environments. But for now, most spaces are designed for humans, making humanoid robots the best fit for these existing spaces.
We’ve probably had the technology to build the mechanical and physical parts of humanoid robots for a while, but without the brain to unlock their capabilities, there was no point. Now that we have the capability to build a brain, we’re seeing an acceleration that’s going to change everything.”
Rev then talked about superhuman specific intelligence and its potential, saying:
“What’s true is that for a long time now, we’ve had specific intelligence—not general intelligence—but specific intelligence that’s superhuman. When we first solved the image classification problem with AlexNet, it wasn’t quite superhuman. But within a year or so, image classifiers could distinguish subtle differences between breeds of dogs better than any human could. We’ve reached superintelligence in many specific areas, like transcription, language translation, and more.
This is what excites me—the fact that we have forms of intelligence that are specific but superhuman. As an engineer, it excites me because I start asking, what can we do with these things to make human life better?
A lot of the stuff that was just a dream or science fiction for a while has now become practical engineering. We’re focused on the pragmatic—what can we do with these amazing new superpowers we’ve created? Let’s go put them to good use.”
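As a small illustration of that kind of off-the-shelf, task-specific intelligence, the sketch below runs a pretrained ImageNet classifier (a descendant of the AlexNet line Rev mentions) on a single image. It assumes PyTorch, torchvision, and Pillow are installed, and the image path is a placeholder.

```python
# Minimal sketch: task-specific "superhuman" perception as an off-the-shelf part.
# Assumes PyTorch, torchvision, and Pillow; "dog.jpg" is a placeholder path.
import torch
from torchvision.models import alexnet, AlexNet_Weights
from PIL import Image

weights = AlexNet_Weights.DEFAULT            # pretrained ImageNet weights
model = alexnet(weights=weights).eval()      # inference mode
preprocess = weights.transforms()            # matching preprocessing pipeline

image = Image.open("dog.jpg")                # placeholder input image
batch = preprocess(image).unsqueeze(0)       # shape: [1, 3, H, W]

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs.max(dim=1)
print(weights.meta["categories"][top_idx.item()], float(top_prob))
```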
Shikha Negi is a Content Writer at ztudium with expertise in writing and proofreading content. Having created more than 500 articles encompassing a diverse range of educational topics, from breaking news to in-depth analysis and long-form content, Shikha has a deep understanding of emerging trends in business, technology (including AI, blockchain, and the metaverse), and societal shifts. As the author at Sarvgyan News, Shikha has demonstrated expertise in crafting engaging and informative content tailored for various audiences, including students, educators, and professionals.